Secondary Effects and Public Morality
The police power consists of the authority of the state to regulate in the interests of public health, safety, and morals. As American society grows more diverse and pluralistic, courts and commentators have raised concerns that the last of these, public morality, cannot serve as an acceptable justification for regulatory action. Indeed, if appeals to public morality cannot be evaluated on an objective basis, then regulators might invoke them to conceal unlawful motives. The ability of moral reasoning to provide a legitimate basis for regulation is thus thrown into doubt. In this Article, we examine a peculiar line of Supreme Court cases in the free speech context that brings the problem into focus. In the so-called secondary effects cases, the Justices gradually moved away from accepting public morality arguments in support of state restrictions on adult businesses. In place of public morality, the Court began to train its focus on the social ills attendant to the activity in question, or what it termed the "secondary effects" of such conduct. Rather than decide whether the regulated activity is immoral, and thus within the legitimate regulatory sweep of the police power as traditionally conceived, the Court instead looked to whether the state could show that its restriction reduced deleterious secondary effects associated with the activity. This development might have appeared desirable insofar as it would permit courts to rest their rulings on objective facts rather than wrestle with matters of opinion and moral sentiment. To the contrary, we argue, secondary effects arguments rely on moral reasoning (whether articulated or not) to the same extent as public morality arguments. The Court's attempt in the secondary effects cases to avoid engaging in moral reasoning in fact demonstrates its indispensability.
Fil: Legarre, Santiago. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina. University of Notre Dame, Indiana; United States. Pontificia Universidad Católica Argentina "Santa María de los Buenos Aires"; Argentina.
Fil: Mitchell, Gregory J. Notre Dame Law School; United States.
How Jurors Evaluate Fingerprint Evidence: The Relative Importance of Match Language, Method Information, and Error Acknowledgment
Fingerprint examiners use a variety of terms and phrases to describe a finding of a match between a defendant's fingerprints and fingerprint impressions collected from a crime scene. Despite the importance and ubiquity of fingerprint evidence in criminal cases, no prior studies examine how jurors evaluate such evidence. We present two studies examining the impact of different match phrases, method descriptions, and statements about possible examiner error on the weight given to fingerprint identification evidence by laypersons. In both studies, the particular phrase chosen to describe the finding of a match (whether simple and imprecise or detailed and claiming near certainty) had little effect on participants' judgments about the guilt of a suspect. In contrast, the examiner admitting the possibility of error reduced the weight given to the fingerprint evidence, regardless of whether the admission was made during direct or cross-examination. In addition, the examiner providing information about the method used to make fingerprint comparisons reduced the impact of admitting the possibility of error. We found few individual differences in reactions to the fingerprint evidence across a wide range of participant variables, and we found widespread agreement regarding the uniqueness of fingerprints and the reliability of fingerprint identifications. Our results suggest that information about the reliability of fingerprint identifications will have a greater impact on lay interpretations of fingerprint evidence than the specific qualitative or quantitative terms chosen to describe a fingerprint match.
The Proficiency of Experts
Expert evidence plays a crucial role in civil and criminal litigation. Changes in the rules concerning expert admissibility, following the Supreme Court's Daubert ruling, strengthened judicial review of the reliability and validity of an expert's methods. Judges and scholars, however, have neglected the threshold question for expert evidence: whether a person should be qualified as an expert in the first place. Judges traditionally focus on credentials or experience when qualifying experts, without regard to whether those criteria are good proxies for true expertise. We argue that credentials and experience are often poor proxies for proficiency. Qualification of an expert presumes that the witness can perform in a particular domain with a proficiency that non-experts cannot achieve, yet many experts cannot provide empirical evidence that they do in fact perform at high levels of proficiency. To demonstrate the importance of proficiency data, we collect and analyze two decades of proficiency testing of latent fingerprint examiners. In this important domain, we found surprisingly high rates of false positive identifications for the period 1995 to 2016. These data would qualify the claims of many fingerprint examiners regarding their near infallibility, but unfortunately, judges do not seek out such information. We survey the federal and state case law and show how judges typically accept expert credentials as a proxy for proficiency in lieu of direct proof of proficiency. Indeed, judges often reject parties' attempts to obtain and introduce at trial empirical data on an expert's actual proficiency. We argue that any expert who purports to give falsifiable opinions can be subjected to proficiency testing, and that proficiency testing is the only objective means of assessing the accuracy and reliability of experts who rely on subjective judgments to formulate their opinions (so-called "black-box" experts).
Judges should use proficiency data to make expert qualification decisions when the data are available, should demand proof of proficiency before qualifying black-box experts, and should admit at trial proficiency data for any qualified expert. We seek to revitalize the standard for qualifying experts: expertise should equal proficiency.
Exact and approximate boundary data interpolation in the finite element method
Matching boundary data exactly in an elliptic problem avoids one of Strang's "variational crimes" (Strang and Fix (1973)). Supporting numerical evidence for this procedure is given by Marshall and Mitchell (1973), who considered the solution of Laplace's equation with Dirichlet boundary data by bilinear elements over squares and measured the errors in the L2 norm. Marshall and Mitchell (1978) then obtained some surprising results: for certain triangular elements, matching the boundary data exactly produced worse results than the usual procedure of interpolating the boundary data.
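The interpolation half of this comparison is easy to examine in isolation. The sketch below is this editor's minimal illustration (not taken from Marshall and Mitchell's experiments): it measures the L2 error of the piecewise-linear interpolant of smooth boundary data g(x) = sin(pi x) on a boundary segment and checks the expected O(h^2) convergence rate as the mesh is refined.

```python
import numpy as np

def boundary_interp_l2_error(n):
    """L2 error of the piecewise-linear interpolant of g(x) = sin(pi*x)
    at n+1 equally spaced boundary nodes on [0, 1]."""
    nodes = np.linspace(0.0, 1.0, n + 1)
    fine = np.linspace(0.0, 1.0, 20001)          # fine grid for quadrature
    interp = np.interp(fine, nodes, np.sin(np.pi * nodes))
    diff2 = (np.sin(np.pi * fine) - interp) ** 2
    return np.sqrt(np.mean(diff2))               # approximates the L2 norm on [0, 1]

errors = [boundary_interp_l2_error(n) for n in (8, 16, 32)]
rates = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
# linear interpolation of smooth data converges at O(h^2) in L2,
# so halving h should divide the error by roughly 4 (rate near 2)
```

Each observed rate should come out near 2, which is the baseline against which the surprising behavior of exact boundary matching for certain triangular elements was judged.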
Rateless Coding for Gaussian Channels
A rateless code, i.e., a rate-compatible family of codes, has the property that codewords of the higher rate codes are prefixes of those of the lower rate ones. A perfect family of such codes is one in which each of the codes in the family is capacity-achieving. We show by construction that perfect rateless codes with low-complexity decoding algorithms exist for additive white Gaussian noise channels. Our construction involves the use of layered encoding and successive decoding, together with repetition using time-varying layer weights. As an illustration of our framework, we design a practical three-rate code family. We further construct rich sets of near-perfect rateless codes within our architecture that require either significantly fewer layers or lower complexity than their perfect counterparts. Variations of the basic construction are also developed, including one for time-varying channels in which there is no a priori stochastic model.
40,000 Missing Girls: Fallacious Spectacle, Out-of-Order Sexual Politics, and Police Violence in Rio de Janeiro
This article analyzes the period of the so-called "Vinegar Revolution" protests, between the 2013 Confederations Cup and the 2014 World Cup. It argues that this period saw a marked shift in sexual politics: "out-of-order sexual politics," which had been accepted and integrated into the popular demonstrations of 2013, came to be subordinated and even abandoned during the World Cup, as police violence against sex workers accelerated while popular protests waned. One rhetorical claim proved especially harmful to sex workers: the assertion that 40,000 women and girls are trafficked for every World Cup, a claim shown to be completely false. These missing girls, though imaginary, nevertheless functioned as a kind of spectacle, and this fallacious and deceptive spectacle enabled the escalation of police violence. The article argues that out-of-order sexual politics are reshaped and become vulnerable to spectacle, even when the spectacle in question is most notable for its ethnographic invalidity.